27 research outputs found

    Fully generated scripted dialogue for embodied agents

    This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). The approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech and body language by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD) and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focusing on the generation of behaviour (i.e., on information presentation) rather than on interpretation.
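    The incremental-enhancement idea described above can be sketched as a pipeline over an abstract script of dialogue acts. Everything below is a hypothetical illustration: the module names, act labels and template sentences are invented, and the real NECA modules are far richer.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class DialogueAct:
        """One turn of the abstract script; the act label is present from
        the start, and later fields are filled in by downstream modules."""
        speaker: str
        act: str                                      # e.g. "greet", "inform"
        text: str = ""                                # added by text generation
        gestures: list = field(default_factory=list)  # added by behaviour planning

    def generate_text(script):
        """Hypothetical text-generation module: realise each act as a sentence."""
        templates = {"greet": "Hello there!", "inform": "Let me show you something."}
        for turn in script:
            turn.text = templates.get(turn.act, "...")
        return script

    def add_body_language(script):
        """Hypothetical behaviour module: attach gestures to suitable acts."""
        for turn in script:
            if turn.act == "greet":
                turn.gestures.append("wave")
        return script

    # The abstract script is passed through the enhancement modules in order.
    script = [DialogueAct("A", "greet"), DialogueAct("B", "inform")]
    for module in (generate_text, add_body_language):
        script = module(script)
    ```

    The point of the sketch is the architecture, not the content: each module reads and extends the same script representation, so new enhancement stages can be inserted without changing the others.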

    Predicting dialogue acts for a speech-to-speech translation system

    We present the application of statistical language modeling methods to the prediction of the next dialogue act. This prediction is used by different modules of the speech-to-speech translation system VERBMOBIL. The statistical approach uses deleted interpolation of n-gram frequencies as its basis and determines the interpolation weights by a modified version of the standard optimization algorithm. Additionally, we present and evaluate different approaches to improving the prediction process, e.g. including knowledge from a dialogue grammar. Evaluation shows that including speaker information and mirroring the data delivers the best results.
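    The core of deleted interpolation can be sketched as a weighted mixture of unigram, bigram and trigram relative frequencies over dialogue-act sequences. The act labels, toy corpus and hand-fixed weights below are illustrative assumptions; the paper optimises the weights on held-out data rather than fixing them.

    ```python
    from collections import Counter

    def ngram_counts(acts, n):
        """Count all n-grams in a sequence of dialogue-act labels."""
        return Counter(tuple(acts[i:i + n]) for i in range(len(acts) - n + 1))

    # Toy corpus of one dialogue-act sequence (hypothetical act labels).
    corpus = ["greet", "greet", "suggest", "accept", "greet", "suggest",
              "reject", "suggest", "accept", "bye"]
    uni, bi, tri = (ngram_counts(corpus, n) for n in (1, 2, 3))
    total = sum(uni.values())

    def p_interp(act, history, lambdas=(0.2, 0.3, 0.5)):
        """Deleted interpolation: mix unigram, bigram and trigram relative
        frequencies with fixed weights (the paper tunes these instead)."""
        h1, h2 = history
        p1 = uni[(act,)] / total
        p2 = bi[(h2, act)] / uni[(h2,)] if uni[(h2,)] else 0.0
        p3 = tri[(h1, h2, act)] / bi[(h1, h2)] if bi[(h1, h2)] else 0.0
        l1, l2, l3 = lambdas
        return l1 * p1 + l2 * p2 + l3 * p3

    def predict_next(history):
        """Predict the most probable next dialogue act given the last two."""
        return max(set(corpus), key=lambda act: p_interp(act, history))
    ```

    The fallback to lower-order frequencies is what makes the scheme robust: when a trigram history was never observed, the bigram and unigram terms still contribute probability mass.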

    What Are They Going to Talk About? Towards Life-Like Characters that Reflect on Interactions with Users

    We first introduce CrossTalk, an interactive installation with animated presentation characters designed for public spaces such as exhibitions or trade fairs. The installation relies on what we call a meta-theater metaphor. Quite similar to professional actors, characters in CrossTalk are not always on duty. Rather, they can step out of their roles and amuse the user with unexpected intermezzi and rehearsal periods. From the point of view of interactive storytelling, CrossTalk comprises at least two interesting aspects. Firstly, it smoothly combines manual scripting of character behavior with an approach to automated script generation. Secondly, the system maintains a context memory that enables the characters to adapt to user feedback and to reflect on previous encounters with users. The context memory is our first step towards characters that develop their own history based on their interaction experiences with users. In this paper we briefly describe our approach to the authoring of adaptive, interactive performances, and sketch our ideas for enriching conversations among the characters by having them reflect on their own experiences.

    Authoring scenes for adaptive, interactive performances

    In this paper, we introduce a toolkit called SceneMaker for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and pre-scripted scenes which can be authored with the SceneMaker in a two-step approach: in step one, the scene flow is defined using cascaded finite state machines; in a second step, the content of each scene must be provided. This can be done either manually, using a simple scripting language, or by integrating scenes which are automatically generated at runtime based on a domain and dialogue model. Both scene types can be interwoven in our plan-based, distributed platform. The system provides a context memory with access functions that the author can use to make scenes user-adaptive. Using CrossTalk as the target application, we describe our models and languages, and illustrate the authoring process. CrossTalk is an interactive installation with animated presentation agents which "live" beyond the actual presentation and systematically step out of character within the presentation to enhance the illusion of life. The context memory enables the system to adapt to user feedback and generates data for later evaluation of user/system behavior. The SceneMaker toolkit should enable the non-expert to compose adaptive, interactive performances in a rapid prototyping approach.
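    The scene-flow idea can be sketched roughly as follows: each node of a finite state machine names a scene and may itself contain a nested sub-flow, which is what makes the machines "cascaded". All scene and event names below are invented for illustration; SceneMaker's actual scene-flow language is considerably richer.

    ```python
    class Scene:
        """One node in the scene flow; a node may carry its own nested
        sub-flow, which makes the state machines 'cascaded'."""
        def __init__(self, name, sub_flow=None):
            self.name = name
            self.sub_flow = sub_flow      # optional inner Scene graph
            self.transitions = {}         # event label -> next Scene

    def play(scene, events):
        """Follow transitions for a list of events and return the sequence
        of scene names that would be performed, descending into sub-flows."""
        trace = []
        while scene is not None:
            trace.append(scene.name)
            if scene.sub_flow is not None:
                trace.extend(play(scene.sub_flow, []))   # run the sub-machine
            scene = scene.transitions.get(events.pop(0)) if events else None
        return trace

    # Hypothetical flow: a welcome scene, a presentation containing a
    # nested sub-scene, and a farewell, linked by event-labelled transitions.
    welcome = Scene("Welcome")
    present = Scene("Presentation", sub_flow=Scene("ProductDetail"))
    goodbye = Scene("Farewell")
    welcome.transitions["user_arrives"] = present
    present.transitions["done"] = goodbye
    ```

    Separating the flow graph from the scene content mirrors the toolkit's two-step authoring: the same flow can dispatch to manually scripted scenes or to scenes generated at runtime.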